Welcome to today's KM World webinar, brought to you by Coveo. I'm Marydee Ojala, editor-in-chief at KM World magazine, and I will be the moderator for today's broadcast. Our presentation today is titled Clean Data, Smarter AI: The Impacts of KM on Generative AI Effectiveness. Before we get started, I want to explain how you can be a part of this broadcast. There will be a question-and-answer session, so if you have a question during the presentation, just type it into the question box provided and click the submit button. We will try to get to as many questions as possible, but if your question has not been answered during the show, you will receive an email response within a few days. And now to introduce our speakers for today: Daniel Shapiro, associate director, enterprise knowledge and content management, Organon, and Juanita Olguin, senior director of product marketing, platform solutions, Coveo. Let me pass it over to Juanita.

Thanks, Marydee, and hi, everyone. Excited to be here today, but even more excited that we'll get to hear from Dan and his amazing story of digital transformation at Organon. Now I probably don't need to tell you, but we've almost gone full cycle on the GenAI hype curve. Over the last year and a half, we've been told by execs, board members, and everyone else that you need to be implementing GenAI, that this is the way to get transformational business results. I wanted to start our conversation here to give you an update on where things are today. As you can see, there are a lot of questions now, and we've really come full circle on whether GenAI is going to live up to its promises. Customers and enterprises are, rightfully so, asking vendors to show them proof, show them that GenAI works. We see several leading companies that are driving these AI and GenAI advancements losing some of their stock value, and the reason is that there's a big question about whether GenAI will live up to the hype and whether companies are going to get results from their investments. And maybe we shouldn't be so surprised, because there has always been a hype cycle with any new technology; GenAI is just the same. Our friends at Gartner predicted back in August 2023 that more than fifty percent of enterprises starting to build their own large language models would abandon those efforts due to complexity, technical debt, and a number of other reasons. So, really, we shouldn't be super surprised. Building this is hard, and it begs the question: why is it so difficult? Why is it so complex to implement effective AI and GenAI? Unfortunately, I'm here to tell you there isn't one silver bullet, and you will hear that from Dan as well. There is not one thing that is going to help you do this successfully. Rather, it's a series of components and elements that work hand in hand, and I'll walk you through those quickly. The first is, of course, needing AI expertise. We're talking about large, complex systems that take real experts and decades of innovation to implement successfully, not to mention the millions of dollars to do this in a production-ready way.
We're also now at the point where we've acknowledged that search and AI, and search and GenAI in particular, go very much hand in hand. We all know the saying: garbage in, garbage out. This is where having deep search relevancy, the ability to filter, rank, and factor out the bad stuff, really comes into play. There's also the need to be able to innovate quickly. We talked about those large language models that enterprises are trying to build on their own, but the fact of the matter is that every day there's a new, better, higher-performing underlying LLM that goes into making these generative answering systems effective. If you're building your own, that gets you into maintenance: are you able to be agile and swap these things out to stay ahead of the curve? Then there's the question of enterprise scalability. We know that everyone is dabbling in this, setting up proofs of concept and trying to understand how this technology works, but oftentimes people get stuck with a proof of concept that can't translate to the enterprise. You need scalability and repeatability to work across different use cases, not just in a siloed proof-of-concept project. There's also the consideration of existing platforms: the big tech we all know and see, CRM, service management applications, and so on. All the big platforms are rolling AI and GenAI into their technology to make it useful for existing customers, but oftentimes this leads to a bit of tech lock-in, being stuck with the content you can make searchable within that particular platform. So there's a need to think of an agnostic approach as well. And, of course, security. You want to make sure people are only getting access to the things they are supposed to see and nothing more; that's another very important consideration for every company out there. You also need that self-learning, closed-loop system: people are searching, clicking, and getting answers, and you should be able to use that information to feed your systems so they learn and improve constantly over time. And last but not least, the ability to measure performance and success, to have KPIs so you can see whether things are trending the right way or not. I'm sure there's a lot I did not cover, but those are a few of the key components and elements that go into creating an effective GenAI system. It's not one thing; it's a series of things. Here at Coveo, we've been working on several of those things since 2005, solving the hardest and toughest content management and knowledge discovery problems. We're proud to say we have over thirty live generative answering deployments today with leading enterprises, and we do that with one platform that can be applied to multiple use cases. I won't take up too much time because I know we're all excited to speak to Dan, but I want to remind everyone that we take security seriously. You can see some of the certifications we have, including being HIPAA compliant. We are consistently rated a leader by Gartner, Forrester, and IDC in our space, and we partner with the leading technology ecosystems out there, everything from Adobe to Salesforce and SAP; we're also newly part of the MACH Alliance, and more.
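Of the components above, the security point lends itself most directly to a concrete illustration. Below is a minimal sketch, in Python, of permission-trimmed search results. The data model (per-document ACL groups, per-user group memberships) is hypothetical; production platforms enforce these checks inside the index and query pipeline rather than post-filtering in application code like this.

```python
# Minimal sketch: permission-trimmed search results.
# Hypothetical model: each indexed document carries the set of groups
# allowed to view it; the user carries their group memberships.

from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    allowed_groups: set[str] = field(default_factory=set)

@dataclass
class User:
    name: str
    groups: set[str] = field(default_factory=set)

def trim_results(results: list[Document], user: User) -> list[Document]:
    """Return only the documents the user is entitled to see."""
    return [doc for doc in results if doc.allowed_groups & user.groups]

# An HR-only document never reaches a user outside HR, so it can never
# leak into a generated answer built on these results either.
docs = [
    Document("Travel policy", {"all-employees"}),
    Document("Pending reorg plan", {"hr-leadership"}),
]
user = User("dan", {"all-employees", "it"})
print([d.title for d in trim_results(docs, user)])  # ['Travel policy']
```

The point of the sketch is simply that entitlement filtering has to happen before any result, or any generated answer derived from it, reaches the user.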
And so we started with: there's a lot of hype. Is this thing real? Does it work? Is it yielding results? What better way to answer that question than to invite Dan to the conversation? So, Dan, thank you for joining us and for agreeing to share your story. Dan and I actually met two years ago at KM World, so it's great to see how far you've come, and you have so much to share.

Absolutely. Happy to contribute some of my story here. As Juanita said, my name is Dan Shapiro. I am the associate director managing enterprise search at Organon. Before Organon, I spent a very long time at Merck, which you may have heard of more than Organon. I've been doing enterprise search in some form for the better part of the last two decades, about ten years of that also managing enterprise collaboration with SharePoint and, more recently, Microsoft 365. AI has, of course, been added into my portfolio along with, I'm sure, many people on this call; I'm also managing the Copilot rollout for my company. So I'm happy to talk about any of those topics, but today we'll be focusing mostly on enterprise search and GenAI. I have spoken at KM World a number of times, and I will be at KM World again this fall, I think speaking a couple of times. If any of you happen to be there in DC, feel free to come up and say hi; I'd love to talk about any of this stuff.

A bit about my company first, since Organon may not be familiar to a lot of people. Organon was essentially a spin-off from Merck, which is, of course, a very large pharma company. In the middle of the pandemic a few years ago, Merck made a decision to spin off essentially its entire women's health product portfolio to create a new pharma company focused on women's health issues. Organon is really the only pharma company whose primary mission is around women's health. Personally, my family and my wife have had enough women's health-related issues that it's a mission I'm a big believer in. I'm very proud to work for Organon, and I think it's great that there's a company truly focused on issues that oftentimes were ignored by other pharma companies. As I said, Organon was a spin-off from Merck, so it was interesting being part of what was basically a ten-thousand-person startup. We came in with a lot of baggage and a lot of history from Merck, but with completely new technology and the ability to go in a lot of different directions. At the time the spin-off happened, I was managing enterprise search at Merck, and I was asked: what do you think Organon should have for enterprise search? I made the decision to not give Organon anything. My thought was that Organon should make its own decisions about search, figuring its use case might be different from what we were doing. And then I found out a few months later that I was responsible for enterprise search at Organon when my job was moved over. So I essentially gave myself nothing to work with, but I stand by that decision completely, because our use case truly was very different. At Merck, our focus was around basic enterprise search on the portal; it didn't really go into other kinds of search. But Organon is a much smaller company, and my mission and my goal is to do all of the search for the entire company.
So not just your basic portal search, but also scientific search, knowledge search, things like that. We cover all of it. I'm really glad we made the decision not to burden ourselves with Merck's history, and we ended up picking a different provider. A couple of years ago, we went through the process, looked at a number of different search providers, and settled on Coveo. I'm very happy with that decision, for a couple of reasons. One is that we are a big ServiceNow shop; not only our internal support desk but even our enterprise portal runs on ServiceNow. Coveo has a great partnership with ServiceNow, and we found that our time to market, and dealing with some of the complexities of ServiceNow, was a lot easier with Coveo. That was a big plus. We also saw some of the, I'd say at the time, pre-AI things Coveo was already doing. We felt that, given where the market was going, they would be well positioned for AI, and I'll talk about that more on the next few slides; I think that played itself out really well. So those are just a couple of the reasons why we went with Coveo, and I think it was a good decision.

This slide is a quick look at what our actual search page looks like. I wanted to give you a peek, so I found a search that had just about every possible feature turned on; for most searches, people don't see all of these things on the screen. But I wanted to give you a feel for what we're trying to do. In our search experience, this basic enterprise portal search is the main search that we do; we're rolling out other ones. You can see we have the refiners on the side, and those refiners are actually using AI. Coveo has AI technology to do dynamic refiners: it changes the order and shows and hides them based upon user behavior analytics, which is pretty cool. We have our best bets in an information panel card that we built to give people basic results. We have our generative answering right in the middle, which we just added about a month ago; I'll talk about that more in the next few slides. And at the very bottom, you can even see organic results. This is just to give you a sense of what we're doing. I really strive to make it feel as much like Google and Amazon as possible while still being what our enterprise needs. Again, most searches don't throw all of these features at the user, but I managed to find one just to show you all the bells and whistles.

But the main topic for the day is GenAI, so let's talk about implementing generative answering. This is something we have just done, like, I'm sure, almost every company out there. The direction from senior leadership is AI, AI, AI: how can you get this into our world? Search is one of the first places we've implemented it in a mainstream fashion for everyone at the company. What we did, as you saw, is supplement our traditional search results with a generated answer written by AI. So what did we learn in that journey? A couple of different things. One is that GenAI is not a magic bullet, as Juanita was saying earlier, and it doesn't let you skip steps. You still have to have a good content foundation to do AI. If you have bad content, you're just going to get bad AI out at the end of it.
So you really don't get to skip the steps; you have to have that quality of content that you really need. Also, on the right side of the slide, there's stale content and conditionality of content. One thing we've learned is that it's hard for the AI tools to differentiate stale content from valid content. In our POC, our beta of the AI solution, we implemented it across some content, and it was very quick and easy to get up and running. But one thing we learned in that process is that it can't tell when content is stale. The site we trained it on had all kinds of great policy content. It also had a folder entitled "obsolete files," and that folder was still broadly accessible; it shouldn't have been, but it was. When we started doing our searches, a lot of the answers looked great, and then you'd realize the citation was pointing to a folder called "obsolete files." So one of the things we learned is that the AI could not figure out that those files were obsolete, because from the perspective of the search tool, it can't tell the difference. That was an interesting example.

Conditionality in the HR space was another one, and this is an issue, I think, for almost any company that has more than one location. The GenAI tools that exist today, and I've asked this question not just of Coveo but of many other vendors, strive to find an answer. The goal of a GenAI tool is to find a single answer to your question. But when you ask a policy or HR question, oftentimes the response is very conditional. If I ask what my vacation schedule is, that answer is going to be different if I'm located in the US or in Australia or Canada or wherever. What we found with the tool is that oftentimes we would do a search and it would give back an answer citing the wrong geography. This is something I think the tools need to improve on. Our solution, by the way, was to simply not train the model on this content, because the reality is we decided it wasn't going to do a good job answering it. Instead, in our organic search results, we have dynamic relevancy rules that do a really good job of pushing content up by matching the user profile country to the content. That's all stuff Coveo does really well, but we decided that in this particular instance the GenAI tool was causing more issues than it was solving, and we went back to a more old-school approach: let's just make the organic results work well. Those are the kinds of things we discovered. I know this is an area Coveo is working on; Juanita, do you want to talk a little bit about how we're going to add more conversationality and contextual awareness?

Yeah. Obviously, context is super important. For the example you gave, you can ask a more precise question, like "find me this in this country," but we don't want users to work that hard. We want to use what we know about them, their location, and their profiles to serve up the right information. So we do have something in the works. I'll call it personalized answers for now, but the idea is that it plugs into your existing systems, into your employee or customer profiles, so that you're able to leverage the context from those systems and the answer is more aware and more precise, without requiring any extra work from the end user's perspective. So that's one way we're thinking about answering that.
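As an illustration of the dynamic relevancy rules Dan describes, here is a minimal sketch, in Python, of country-aware re-ranking. The field names, the "global" fallback, and the boost factor are hypothetical; this is not Coveo's rule syntax, just the shape of the idea: match a user-profile attribute against content metadata and promote the matches.

```python
# Minimal sketch: boost results whose country metadata matches the
# user's profile. Fields ("country", base "score") are hypothetical.

def boost_by_country(results: list[dict], user_country: str,
                     boost: float = 2.0) -> list[dict]:
    """Re-rank results, multiplying the score of country-matched items."""
    def adjusted(r: dict) -> float:
        matches = r.get("country") in (user_country, "global")
        return r["score"] * (boost if matches else 1.0)
    return sorted(results, key=adjusted, reverse=True)

results = [
    {"title": "Vacation policy (US)", "country": "US", "score": 1.0},
    {"title": "Vacation policy (AU)", "country": "AU", "score": 1.1},
]
# A US employee sees the US policy first; an Australian employee the AU one.
print(boost_by_country(results, "US")[0]["title"])  # Vacation policy (US)
```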
That's great. I think that kind of capability is going to be really important as we all look to evolve and improve what we're doing. You're right: if you teach people how to be great prompt engineers and great searchers, these problems don't exist. The reality is that Google has spoiled everyone. Everyone assumes they can put three words in the search box and find exactly what they want, and we all know that's not always going to be the case. So I think that's a gap that can be closed as we move forward.

One other story I forgot to tell, about the other half of the slide, on excluding certain content from a site. Sometimes you have a site with great content, and then a portion of the site is going to be an issue. We trained our model on a site related to risk assessment; everyone in the company who's running a project has to do risk assessments. And it was great, it was answering questions about those. What we also noticed was that the site contained an entire folder with every risk assessment that had ever been done at Organon. When you asked certain questions, about code of conduct, for example, it happened to find some other company's code of conduct that was attached to a risk assessment. So again, was this a search problem? Not really. A model problem? Not really. It was a content problem: the content should never have been broadly accessible in the first place. We adjusted the model, and we adjusted the content permissions. But it's the kind of thing that search and AI are going to find for you in a way you have never experienced before. These problems existed for years, but they were highlighted by this new technology.

Alright, so I want to talk about how we measure success. I know there was some talk before this presentation about how we're actually going to measure this. I have no graphics on this slide for a reason: we are now in a world where the way we have measured success for the last two decades of doing search projects is, all of a sudden, no longer the right model. We have always measured success in a pretty basic way: the user does a search, and then they click something, and that tells us, at a high enough macro level of data, that they likely found what they wanted and had a generally good experience. Now we have turned that model on its head by introducing GenAI. For any informational query, the user may no longer need to click on anything if they got their answer from a good-quality generated answer. So now the model is: they do a search, they read the screen, they're happy, they leave, and we have no analytic to capture what just happened. And I'll tell you, in our Coveo analytics, in the month since we implemented GenAI, I've seen maybe a five percent decrease in our primary search metric of how often people click on something.
But I don't think it's because search has gotten worse. I think it's because we can't measure that we've gotten better. So this is a tricky one, and I think the industry is really evolving here. I've asked this question a lot at KM World, and I'll be asking it again this fall: who has figured this out? At a high level, the way we analyze and measure our success is with what Coveo calls their big four metrics, which are around clicks per search, clicks per session, zero-result searches, and so on. We benchmark against all of those, and so far we're still above all those thresholds. But again, I think we have to think about how this is going to change as the paradigm of search changes. We have user feedback as well, and I'll tell you, I have never done a search project where I got less negative feedback than this one. People don't call the power company to thank them when they flip the light switch and it works, right? So no one's going to give you positive feedback; that's very rare. But you will get negative feedback if you're doing a bad job, and we don't get much. So I've been very happy with that side of it from an anecdotal perspective. But, Juanita, I know you and I have talked a little bit about the ways Coveo is trying to figure out how you measure in this new world of AI.

Yeah, I have so many thoughts about this. I feel like maybe we can have our own breakout at KM World on this topic.

That would be great. I'm in.

What I would say is, I'll answer it in two ways, if that's okay. The first is that I see success measured on three different levels. I'll start with the bottom level, which to me is more of those operational, KPI-type metrics you mentioned, Dan: your search click-through, maybe self-service success. Having those operational, performance-based metrics tells you whether this thing is working and how it's working. The second level, which you just mentioned, is the user feedback: employee satisfaction surveys, employee effort scores, or NPS, whatever you use to measure. Those are important too, and you're right, you often get feedback when things are bad, not when they're good, but that's another good signal. And then the topmost level, which I think is always tricky, especially for internal workplace use cases, is the actual ROI: did you reduce cost, did you improve productivity? That's a little difficult to measure from an employee perspective. Now, if you're talking about sales employees, they have pipeline and revenue goals, so maybe you can get creative there. But showing those higher-level, bottom-line metrics gets a little tricky for internal workplace use cases, though they're there for some of them. So that's how we typically look at it, and I feel like that's a conversation that should be had. And we should not underestimate the importance of a positive employee experience; sometimes I feel like that gets overlooked. If people are able to find the answer and get the context they need, they can move on with their lives and focus on high-value work. In pharma or life sciences, that type of work and time is super important; we don't want anyone wasting it on not being able to find what they need. So that's at least another way to think about it. But let's dive deeper at KM World.

Yeah, that sounds great. I think it's an area no one has really figured out completely yet, but we all need to, because how do we tell our story of success if we can't measure it, or if we don't know what success looks like in the new world?

Exactly.
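For readers who want to see what the classic model actually measures, here is a minimal sketch, in Python, of computing click-through and zero-result rates from a raw event log. The event shapes are hypothetical, not Coveo's analytics schema; the third search in the sample data shows exactly the blind spot Dan describes, where no click used to mean failure but may now mean a generated answer succeeded.

```python
# Minimal sketch: basic search KPIs from a raw event log.
# Event shapes are invented for illustration.

from collections import defaultdict

events = [
    {"search_id": "s1", "type": "search", "result_count": 12},
    {"search_id": "s1", "type": "click"},
    {"search_id": "s2", "type": "search", "result_count": 0},
    {"search_id": "s3", "type": "search", "result_count": 8},
    # s3 has no click: pre-GenAI this read as a failed search; with a
    # generated answer on the page, it may actually be instant success.
]

searches = [e for e in events if e["type"] == "search"]
clicks_per_search: dict[str, int] = defaultdict(int)
for e in events:
    if e["type"] == "click":
        clicks_per_search[e["search_id"]] += 1

click_through_rate = sum(
    1 for s in searches if clicks_per_search[s["search_id"]]) / len(searches)
zero_result_rate = sum(
    1 for s in searches if s["result_count"] == 0) / len(searches)

print(f"CTR: {click_through_rate:.0%}, zero-result: {zero_result_rate:.0%}")
```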
So I think I have one more quick slide here, about content cleanliness. One thing I always like to say anytime I present is that search is a mirror: it uncovers issues in your content. What I've learned is that GenAI is a funhouse mirror: it magnifies the issues to a level you've never experienced before. Problems that had been in our data for years, that we just never knew about, surfaced within a month of implementing GenAI. I've had three or four calls from executives saying, "Hey, I found this thing." And usually the answer is not "fix search" or "fix AI," but simply to tell someone, "Hey, your content needs to be secured," or, "Hey, your page is missing a bunch of really obvious keywords, so it's not being used to answer certain questions." Stuff like that had probably existed for years, and it's being uncovered at a really rapid rate. So I definitely encourage people to take their time and implement this carefully. We spent about six or seven months rolling out our solution, trying to figure out how to get it right. Even then, we still ran into content issues; it's hard to grasp the amount of data you're really rolling into these things, and you are going to find problems. I've been surprised at how much GenAI has shone a spotlight on our problems, in a way I guess I knew was coming, but it's been interesting to see it firsthand. I mentioned this before about valid versus stale content, and I saw a question in the chat about this already: it just doesn't understand the difference between the two.

To the last point on the slide: you can't control the black box very well unless you build your own LLM, which for most of us is not cost-effective. So the best thing you can do is control what goes into the box. Most of your job as the GenAI steward is making sure you're only letting in content that's been vetted, working with the site owners. I had a conversation with every single owner of every SharePoint site that we included in the model, and I have my support desk reaching out to them and helping identify stale content, dead links, et cetera. A lot of that is really critical: having stewardship policies, and having people review their content periodically to assert that it's still valid. And to that last point, we're actually looking at a new technology from AvePoint called Opus, which is meant to identify ROT content: redundant, obsolete, and trivial content. Essentially, the tool lets us do this on a file-by-file level. It uses AI and analytics to figure out what content in your SharePoint tenant is probably causing problems, or is old, or exists in many versions of the same document. It's meant to help find those issues and, in theory, clean them up. We're just testing it now, but I feel it has a lot of potential to close the gap between what we're doing today from a stewardship perspective and where we want to be in the long run, where we can go file by file, identify the problem files, and get people to take action on them.
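To give a feel for what file-level ROT detection involves, here is a minimal sketch in Python. The thresholds and fields are invented for illustration and say nothing about how AvePoint Opus actually works; real tools layer AI and analytics on top of signals like these.

```python
# Minimal sketch: flag ROT (redundant, obsolete, trivial) files by
# age, usage, and duplicate content. All thresholds are illustrative.

import hashlib
from datetime import date

files = [
    {"path": "/policies/travel-2024.docx", "modified": date(2024, 5, 1),
     "views_last_year": 340, "body": b"Travel policy v7 ..."},
    {"path": "/obsolete/travel-2017.docx", "modified": date(2017, 2, 3),
     "views_last_year": 0, "body": b"Travel policy v3 ..."},
    {"path": "/hr/travel-copy.docx", "modified": date(2024, 5, 1),
     "views_last_year": 2, "body": b"Travel policy v7 ..."},
]

seen_hashes: dict[str, str] = {}
for f in files:
    flags = []
    if (date.today() - f["modified"]).days > 3 * 365:
        flags.append("obsolete?")        # stale by age
    if f["views_last_year"] == 0:
        flags.append("trivial?")         # nobody reads it
    digest = hashlib.sha256(f["body"]).hexdigest()
    if digest in seen_hashes:
        flags.append(f"redundant? (dup of {seen_hashes[digest]})")
    else:
        seen_hashes[digest] = f["path"]
    if flags:
        print(f["path"], "->", ", ".join(flags))
```

A report like this is only the start; as Dan notes, the hard part is the policies and procedures that get owners to act on the flags.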
So I think that's the last of my slides. Juanita, I think you had a couple of questions you wanted to fire at me, and then possibly we can answer some questions from the audience if we have time.

Yeah, definitely. And for those joining us, please feel free to continue asking questions, and we'll try to get to those during our time here. These next few questions are going to go a little deeper, hopefully, and provide more context on your experience and what you went through, Dan. I'll take them one by one. The first is: why is search important? Why is it even part of the AI and GenAI conversation?

Obviously, I'm biased here as someone who's done search for twenty years, but I really do feel that search is fundamental to all of this. When we talk about what AI is nowadays, to be honest, AI is a glorified search engine. It really is a search engine that can simply give you better answers. Instead of giving you back a list of a hundred documents, it can summarize those documents for you and give you a much easier-to-consume, easier-to-understand version of what the engine just returned. In many ways, today's AI solutions really are search engines that happen to have a better way of doing output. And a lot of it is people's mental models of how they expect to interact with their world, especially the generations coming into the workplace now who grew up with Google being ubiquitous from the first day they ever touched a computer. They just expect to be able to search and find whatever they're looking for. So I think search is fundamental to all of this, and doing search for twenty years prepared me very well for the AI revolution we're in now, because the same things we've been preaching and learning and trying to get people to do for decades are exactly what you need to do to make your AI solutions work.

Okay, I didn't put this one on the screen, but I'm going to ask it anyway; I hope you're okay with that. Why didn't you build your own search engine or tool from scratch for your enterprise search?

I just don't think it's worth it, given what the tools in the marketplace today can do and what they cost, versus the cost of developing something yourself. And then you need to have your own storage for it.
I mean, having done on-premise search for many years before the advent of the cloud, I know what it takes to actually maintain a search engine: how much storage space you need, all the processing that goes into it. It's not cheap, and you need high-powered CPU resources. Yes, you can do it in the cloud, but you need expensive cloud assets; this isn't running off some cheap S3 storage or whatever, you need high-volume infrastructure. So it's just not cost-effective, and we never even considered it. We considered a lot of different options, and I think we'll talk about that later in these slides, but we definitely never looked at the idea of building the whole thing from scratch, nor did we look at that from an LLM perspective. We never wanted to build our own LLM; the cost is just not feasible. For the companies investing in that, if they have a great use case, that's fine. But for those more in the experimental phase, I think it's better to leverage assets that are out there than to try to build your own server farms for this stuff. It's just too big.

Fair enough. Alright, next one: why are agnostic tools important?

Sure. As I said earlier, we are doing multiple use cases. We haven't talked much about the scientific search use case yet, but I'll mention it more as we go through the last bit here. We needed tools that could work with a lot of different systems. We looked at using the built-in Microsoft search that we had from day one, and we looked at using ServiceNow's built-in search. What we learned about all of those tools is that they handle their own use case fine, but the moment you try to integrate them with anything else, they're lacking; there are issues, even with Microsoft search. You really have to find ways to bolster them. So rather than trying to prop up a product that was only going to be half of what we wanted anyway, we'd rather find a product that was truly agnostic, flexible, and able to work with a lot of different data sources. And we have quite a few: we're only a year into this project, and we've already integrated with, I think, six or seven different sources, with more on the horizon. From that perspective, I think going down the road of a dedicated enterprise search solution makes a lot of sense. There are certainly cases, smaller companies, companies with only a couple of data sources, where you might be able to do what you need with Microsoft search or ServiceNow search or whatever platform you're running on. But the second you need this thing to be flexible, to dip into lots of other systems and manage multiple security models, well, I did enough of that for years with Microsoft search at Merck, my old job, to know it wasn't going to work for what we have to do here at Organon.

Fair, thank you for that. This should be a popular one for those listening in: to what extent does search, but also AI, since we always talk AI and GenAI, help with KM and content cleanliness?

There are a couple of ways to answer this question. The first thing I'd say is that I don't actually know that search truly helps with KM and content cleanliness.
I think search mandates that you do good KM and have clean content, because otherwise you get bad search. If you're okay with having search that doesn't work that well, then you don't have to do the KM part of it; you can just accept what you get. But if you want the search and the AI to work well, you have to have that foundation in place. I do think the AI part can help with some things like content classification. Some of the historically painful parts of KM, figuring out how to classify your content, you can definitely use AI for: to identify the content that's going to perform best, tag it, and make the other steps easier. So there are pieces of the puzzle where AI can enable parts of the KM journey, let you skip a couple of steps, or at least make some steps less painful. But at the end of the day, if you want to do search and AI well, you have to be doing the other stuff. You have to have KM processes, and you have to have clean content. Otherwise you just get subpar search or subpar AI.

That makes sense, and it ties back to what you were saying about how GenAI magnified the content gaps that existed.

Yeah, it was striking to see how quickly our problems became very visible. My CIO is like, "What about this?" and I'm like, "Why am I getting these?" But the good part is, with the Coveo solution being as good as it is, the issues that were found we fixed in hours; not even days, hours. I was able to fix the problems in a relatively clean way while we went back to the owners of the data and said, "You need to fix this part of it." We could get this stuff taken care of very quickly with the way the tool works.

Amazing. Alright: how many generative answering solutions did you try?

What I'd say is that we pretty quickly realized that what Coveo was doing with their solution was exactly what we wanted for that particular use case of augmenting our search results, so we didn't look around too much there. We do use Microsoft Copilot, as I mentioned earlier, and custom Copilots, and I've looked into ChatGPT's enterprise offerings; we have tons of other things happening. But for the particular space of augmenting our search results, what Coveo was offering was, number one, incredibly easy to do. As I mentioned, it was hours. I had one phone call with the services team, we figured out the query to build our model, and within hours it was built and working. The time to market was incredibly quick. From a cost perspective, I've talked to a lot of AI providers, and what we're getting for the cost of this particular solution is very affordable compared to others, where we know stuff is expensive. I honestly think this is a solution that is simple, does its one purpose really well, and is not insanely expensive like a lot of AI stuff is, just because of the cost of CPU and hardware. So, no, we didn't spend much time looking at other ways to augment our search results. This was the obvious choice for us because we're already using Coveo, and for what we needed to do, it does the job perfectly.
We do have a lot of other AI solutions in play for other parts of the enterprise, just not in this particular spot.

Fair, that makes sense. Alright, this next one I felt was a really interesting question, because you have great experience here: how does the intent behind these tools impact how they get used? Or how did you select tools based on the intent of the tool?

Yeah, that's a great question. As I said, we have multiple search solutions running off one Coveo platform, because I'm a big believer that I'd rather have a couple of different solutions that each do their use case really well than one solution that does a lot of things badly. Having just one place to go isn't always the right answer. My preference is to have different tools that each meet their purpose, with loose coupling between them, so that when the user is in one place, if we can figure out their intent, we can match it to the intent of the right system to deliver their answer, and then use APIs, GenAI, and all these tools to give them the right answer and push them in the right direction. That, I think, is where we really want to go. We're looking at this for our scientific search solution: using AI to get an external, market-research perspective on the world. Rather than having to go off and index tons of scientific sources on the web, I can go to someone who does that already, use GenAI to have them write me a summary, and bring it back onto my page. And then, if we figure out the user might want that external market-research perspective, we give them a way to follow that thread off to this other system. So my general philosophy is: I'd rather have systems that do their purpose really well, and then use search, AI, and other tools to let people follow those threads through. Things like best bets, panels, and GenAI can help you do that. All of these tools we have today let you capture the user's intent, hopefully, and then direct them the right way, rather than trying to force everything to work in one system.

Fair, thank you.
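Dan's loose-coupling idea can be sketched as a simple router: classify the query's intent, then hand it to the system best suited to answer. The keyword classifier and route names below are toy assumptions; a production system would use an ML intent model and real APIs, but the control flow is the point.

```python
# Minimal sketch: route a query to the system matching its intent.
# Intent labels, keywords, and backends are hypothetical.

ROUTES = {
    "it_support": "ServiceNow search",
    "scientific": "Scientific knowledge search",
    "market":     "External market-research summarizer",
    "general":    "Enterprise portal search",
}

def classify_intent(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("vpn", "password", "laptop")):
        return "it_support"
    if any(w in q for w in ("assay", "compound", "clinical")):
        return "scientific"
    if any(w in q for w in ("competitor", "market share")):
        return "market"
    return "general"

def route(query: str) -> str:
    return ROUTES[classify_intent(query)]

print(route("reset my vpn password"))         # ServiceNow search
print(route("clinical trial results for X"))  # Scientific knowledge search
```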
We have a couple more here before we move on. What is one thing you're most surprised about with generative answering?

Sure, I'll give you the good answer and the bad answer. On the good side, I was shocked by how quickly this stuff worked and how quickly we got to market. I'm not joking when I say you literally figure out the query you want to run, plug it into the model builder, wait a couple of hours, and you have working GenAI. Depending on your UI, it might take a little more time to turn it on, but the time it took for us to go from "okay, we licensed this" to "this is functional and testable," to the point where I could show it to my end users and they said, "Oh, this is fantastic," was hours. That was shocking to me, that it was that easy to do. It obviously assumes you've already got a search index and all the rest in place, but once you have a working search solution, adding the AI on top of it was one of the easiest things we've done. And the quality of it in general, the performance, was really good. In ninety-five-plus percent of cases, the answers were good. We don't see a lot of hallucination, to be honest. I see more cases where we need to retrain the model or rethink how we're training it; I almost never see it hallucinate and give completely weird answers. It's more, "Oh, I shouldn't have trained it on that stale document, or on this particular site." That was our learning. On the other side, the negative surprise was that I was really hopeful that, because we were already adjusting our organic results based on relevancy, it would do a better job on those policy searches where the answer differs based on who you are. The reality was it wasn't doing a great job with that yet. That's where, hopefully, some of the things we heard earlier from Juanita about making it more conversational and more contextual are what needs to happen; the technology is not there yet. So you can't assume that because you fixed your organic results, you've fixed your AI.

Fair. And by the way, I love that you said hours; that's because you are already an existing Coveo customer. For net-new customers trying to stand up generative answering, we're telling people four to six weeks. And I'm glad you mentioned the hallucination thing, because that's also by design: we'd rather show no answer than a bad answer. There are controls in place, by design, to reduce that level of hallucination, or perhaps eliminate it. Alright, I think this is the last question: what are you hoping to do next?

Sure, absolutely. Next up, as I mentioned, our next big thing is our scientific search solution. We've been in development on this pretty much since the spring, and we are very excited to be doing our first full launch next week. We're in the final stages of UAT now, and we're going live with, essentially, a scientific knowledge search solution. It's really focused on our R&D and manufacturing folks, giving them a tool to find data in the systems they care about, the ones that are part of their line of business but aren't necessarily part of the rest of the company's day-to-day work. That solution is going live, and we're hoping to roll out GenAI on top of it in about a month; that's going through testing now. This is the first time we've tried to use actual scientific, medical content in our search tool, and we have to have actual scientists and researchers vet whether it's doing a good job with that. The early results were very positive, by the way. Initially it looks like it is doing a good job with that kind of content, but it's a flavor of content we've never tried: very, very large documents with deep medical detail, and so far, hopefully, Coveo is doing a pretty good job with that as well. That's where we're going next, and then continuing to onboard more sources, like I talked about, trying to bridge in the external world via market research through AI. A lot of it is just trying to find ways to connect the system to more places than where we are today.

Amazing, thank you.
I have one more slide just to invite those of you listening to join us to learn more, if you liked what you heard and enjoyed hearing from Dan. There are a couple of places to follow up next. The first is our Relevance 360 event, which is really our innovation showcase. We'll talk about the latest and greatest we have to offer with generative answering. I mentioned that there are always updates and innovations happening with those underlying LLMs, so we will talk about what we're doing to stay ahead of the curve there, and introduce net-new capabilities as well. The second, of course, is that we would love to see you at KM World. Dan, maybe we can have that breakout session there.

Yeah, I'll be around; feel free to pick my brain. I'm happy to talk about any of this stuff; I could discuss search and AI all day.

Amazing. So I think from here we'll open it up to questions. I see quite a few, so we'll do our best to work through them.

Well, thank you both, and we do have some questions from our viewers. Just personally, following on from what Dan said: I run the enterprise search and discovery portion of the KM World conference, and I am looking forward to having more of a conversation with Dan, because he is speaking in that co-located conference at KM World. But now to move on to actual questions from our viewers. This one, Dan, I guess is for you: how big a team do you have to support your GenAI programs? Or is it just you?

So it's mostly me as the center point of it. I have a team of developers, but they're not really working much on the GenAI other than implementing it, turning it on in the interface. In terms of actually managing the model, managing the data, and what goes into it, it's basically me, and then I've got probably eight to ten key business partners around different parts of the business: policy, HR, support desk, et cetera. I work with them, and they then go to the content stewards in their areas. So formally, it's me. Informally, it's me with a group of other folks who are very much part of my virtual, unofficial team and who help me with different parts of the content space. But officially it's me, and I do my best to be the gatekeeper of the models, I guess.

Okay, this is a related question. How do you work with knowledge management teams or SMEs, subject matter experts, to keep the content accurate? Because I know you were talking about stale content. How do you handle all that?

It's a tricky situation, and a great question. I relied on my support desk quite a bit. When we did our rollout, I would give my support desk a list of the sites we were onboarding, and they would reach out to the owners of those sites and say, "We reviewed your site for you. We found some problems. Can you fix them?" They know what content is valid and what is not to a degree that I don't always know; I'm not the SME. So if a site owner says, "Oh yeah, our site's up to date," I don't always know if they're blowing smoke or telling the truth. So I rely on my support desk a fair bit.
And then, as far as KM teams, there are some parts of the company that have really evolved KM teams. I'm actually a member of the KM team representing our IT function for manufacturing, so I work closely with that team. They have a whole network of different SharePoint sites that we're going to use for our scientific search solution to train that model, because they're oriented more around manufacturing and the way we manage our products. There, I actually have a good counterpart on that team who essentially manages my relationship with them, and he works with all the stewards within his division. So it varies, but at the end of the day, a lot of it is relying on the expertise and the bandwidth of others who are already in those roles to be the intermediaries, because I know people will listen to them; they're coming from somewhere that gives them the authority on why people need to listen. And it doesn't hurt that my management is very AI-focused; everything we have right now, from every executive, is AI, AI. So people know that when I or my manager talk to them about AI, they should listen, because they know it's a priority of our company from the executive leadership level down. It's been very clear in every town hall that this is something we need to do.

Well, you're very fortunate. Okay, let me throw this question over to Juanita. Should we solely focus on the measurement of user successes? Doesn't it make more sense to develop or apply some sort of qualitative research evaluation or assessment techniques instead of quantitative methods?

Yeah, what I would answer there is that I actually mentioned both, in terms of the three levels. The bottommost layer is those operational KPIs, like click-through and success rate. The second level is more of that qualitative end-user feedback: are employees happy, are they getting the answers they need, what effort does it take to do that? And the topmost layer comes back to more of those concrete figures that management and execs like to see: did this impact revenue, did this impact costs? So I think there's a mix of those that together help paint a holistic picture. I hope that answers that one.

This one, I believe, is for you, Dan. Can you share the name of the tool you mentioned to help find the stale and duplicative content?

Sure. It's called Opus, O-P-U-S, and it's from AvePoint. We use AvePoint as a governance solution for our Microsoft 365 platform in general, and I definitely recommend it if you're looking for a way to make your Microsoft 365 less of a disaster. It was something Merck had back in the day that we adopted as part of spinning up, and I'm really glad we use it. We use it for a ton of things. Anytime you want to request a new site, a new Viva Engage (formerly Yammer) community, or a new Team, everything is managed through AvePoint; all the business processes are controlled in there. We also use their Policies and Insights product to help identify potentially problematic content.
So when you roll out this AI stuff, and this is more of a Copilot thing than a search AI thing, but if you want to roll out something like Copilot that's going to look at your entire Microsoft 365 world and use that content, something like AvePoint that can figure out where your risks, issues, and problems are is really, really valuable. We use it heavily for that. Opus is a newer offering they have; just this morning we had a meeting with them to run through a report, and it was very eye-opening to see how many issues it was able to flag. Now we have to figure out how to build policies and procedures around that. But yes, the tool is AvePoint Opus, and AvePoint in general is a strong governance tool that makes your Microsoft 365 less of a wild west, I guess.

Less of a wild west, okay then. Let's see what else we've got here. Here we've got a comment: our company's leadership is the locomotive driver of the AI hype train. That's a good analogy. How can I convince them that KM governance and processes need to be established first, before going heavy on AI?

I'll take a first stab at this one, and maybe, Juanita, you can add to it. I would just say that, in my experience, it's garbage in, garbage out. If you haven't taken the time to govern your content, you will not solve that by throwing AI in front of it. In fact, what you're going to do is find the problems real fast, because by not doing the prework to regulate how you train your model and to make sure you're putting in content that will create a good experience, you're going to magnify your problem, if anything. You're going to show very quickly where your issues are. We're lucky that my leadership understands that. When our policy team came to me last year and said, "We want to put AI on top of our policy searches; it's going to fix everything," I said, "Your content is in forty-five different places. It's not tagged. It's not managed. You will never solve this problem that way." And they understood; we had a good discussion and agreed. I rolled out a policy search feature as part of what we launched about two months ago on our enterprise search, where we finally captured what is policy content. Now that we have it in the index in a structured way, we're talking about how to layer AI on top of that. So if you don't do the prework, all I can say is it's almost worse, because you're going to magnify the problems you have.

Actually, if I can add to that, Dan: I think what you should do is show them a site with AI on top of the KM or content in its current form and see how good the answers or results are. If you show them a bad answer, a hallucination, or a wrong result, and obviously you don't do this on your live production site, you can set up a test site for them, then showing them what bad looks like, and what will happen if you don't clean up and take the proper steps, is another way to get them to say, "Oh, I get it."
I see what's happening here and why this is important.

Yeah, I love that. And like I said, doing these pilots was not difficult from a timing or cost perspective. It was really simple for us, with Coveo anyway, to get a pilot up and running and show something immediately, with very little investment on our side. And like I said, in that pilot we found those issues with the obsolete files in that one site. Okay, the tool was working great, but that was the kind of stuff we were going to have to deal with. It made us aware that we couldn't just turn this on; we then spent six months on content stewardship and cleanup before we actually flipped the switch.

I'm interested, Juanita, because this discussion goes back to some of your very first slides. From your perspective, is this still happening, that people just want to throw AI at something and say, okay, that'll fix everything? Or have they calmed down?

I think right now everyone's kind of hesitant, not sure. Actually, those headlines I showed you, that GenAI or AI is not paying off, that could be because people are not really understanding what it takes to make this stuff work and how important content and knowledge management practices are. The other thing I'll say is that companies feel like they have to do this alone and figure it out alone, and obviously that's not the case. You have vendors like us and others, depending on what you're trying to do and establish, that can help you speed these things up. So maybe management doesn't always understand the details of what goes into this, and this is where we need to be creative to show them and get them to pay attention a little bit more.

Absolutely. Well, that is all the time we have for questions today, and we apologize that we were unable to get to all of your questions. But as I stated earlier, all questions will be answered via email. I would like to thank our speakers today: Daniel Shapiro, associate director, enterprise knowledge and content management, Organon, and Juanita Olguin, senior director of product marketing, platform solutions, Coveo. If you would like to review this event or send it to a colleague, please use the same URL that you used for today's event. It will be archived for ninety days. Plus, you will receive an email with the URL to view the webinar once the archive is posted. If you would like a PDF of the deck, go to the handout section once the archive is live. Thank you again for joining us.

Clean Data, Smarter AI: The Impacts of KM on GenAI Effectiveness

Key webinar highlights:
  • Ways to unify and amplify knowledge across platforms like ServiceNow, SharePoint and Google Drive
  • Proven strategies for refining generated outputs to minimize inaccuracies and improve self-service resolution while reducing cost-to-serve
  • Techniques for leveraging analytics to drive continuous improvement and user satisfaction
  • Practical insights from Coveo's enterprise customers at Xero, F5 and Forcepoint
Juanita Olguin
Senior Director, Product Marketing, Coveo